FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective

Sun, Jingwei, Li, Ang, DiValentin, Louis

Neural Information Processing Systems

Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks.



Undermining Federated Learning Accuracy in EdgeIoT via Variational Graph Auto-Encoders

Li, Kai, Hu, Shuyan, Wu, Bochun, Zou, Sai, Ni, Wei, Dressler, Falko

arXiv.org Artificial Intelligence

EdgeIoT represents an approach that brings together mobile edge computing with Internet of Things (IoT) devices, allowing for data processing close to the data source. Sending source data to a server is bandwidth-intensive and may compromise privacy. Instead, federated learning allows each device to upload a shared machine-learning model update with locally processed data. However, this technique, which depends on aggregating model updates from various IoT devices, is vulnerable to attacks from malicious entities that may inject harmful data into the learning process. This paper introduces a new attack method targeting federated learning in EdgeIoT, known as data-independent model manipulation attack. This attack does not rely on training data from the IoT devices but instead uses an adversarial variational graph auto-encoder (AV-GAE) to create malicious model updates by analyzing benign model updates intercepted during communication. AV-GAE identifies and exploits structural relationships between benign models and their training data features. By manipulating these structural correlations, the attack maximizes the training loss of the federated learning system, compromising its overall effectiveness.
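The core idea, crafting a malicious update from intercepted benign updates alone, without any training data, can be illustrated with a much simpler stand-in for the paper's AV-GAE: scale the benign consensus direction and flip its sign so the aggregate moves against it. Everything below (function names, the scaling factor, the toy dimensions) is illustrative, not the paper's actual method.

```python
import numpy as np

def benign_aggregate(updates):
    """FedAvg-style aggregation: element-wise mean of client updates."""
    return np.mean(updates, axis=0)

def data_independent_malicious_update(intercepted_benign, scale=5.0):
    """Craft a malicious update using only intercepted benign updates
    (no training data): push the aggregate opposite to the benign
    consensus direction. A crude stand-in for the paper's AV-GAE."""
    consensus = np.mean(intercepted_benign, axis=0)
    return -scale * consensus

# Toy example: 4 benign clients, 1 attacker.
rng = np.random.default_rng(0)
benign = [rng.normal(0.5, 0.1, size=8) for _ in range(4)]
malicious = data_independent_malicious_update(benign)

poisoned = benign_aggregate(benign + [malicious])
clean = benign_aggregate(benign)

# With scale=5, the poisoned aggregate is exactly -clean/5, i.e. it
# points away from the benign consensus.
print(np.dot(poisoned, clean) < 0)  # True
```

The sketch shows why data-independence matters: the attacker never touches a training sample, only the model updates it intercepts in transit.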


A Trustworthy AIoT-enabled Localization System via Federated Learning and Blockchain

Wang, Junfei, Huang, He, Feng, Jingze, Wong, Steven, Xie, Lihua, Yang, Jianfei

arXiv.org Artificial Intelligence

There is a significant demand for indoor localization technology in smart buildings, and the most promising solution in this field is using RF sensors and fingerprinting-based methods that employ machine learning models trained on crowd-sourced user data gathered from IoT devices. However, this raises security and privacy issues in practice. Some researchers propose to use federated learning to partially overcome privacy problems, but security concerns remain, e.g., single-point failure and malicious attacks. In this paper, we propose a framework named DFLoc to achieve precise 3D localization tasks while addressing these two security concerns. In particular, we design a specialized blockchain to decentralize the framework, distributing tasks such as model distribution and aggregation, which are handled by a central server in most previous works, to all clients; this addresses the issue of single-point failure for a reliable and accurate indoor localization system. Moreover, we introduce an updated model verification mechanism within the blockchain to alleviate the concern of malicious node attacks. Experimental results substantiate the framework's capacity to deliver accurate 3D location predictions and its superior resistance to the impacts of single-point failure and malicious attacks when compared to conventional centralized federated learning systems.
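The tamper-evidence a blockchain gives a decentralized FL framework comes from hash-chaining recorded model updates: altering any recorded update invalidates every subsequent block hash. The sketch below is a generic illustration of that mechanism, not DFLoc's actual block format or verification protocol.

```python
import hashlib
import json

def block_hash(payload, prev_hash):
    """Hash a model-update record together with the previous block's
    hash, chaining blocks so recorded updates cannot be altered."""
    data = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(data.encode()).hexdigest()

def append_block(chain, update_record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": update_record,
                  "hash": block_hash(update_record, prev)})

def chain_valid(chain):
    """Re-derive every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"client": "dev1", "update": [0.1, -0.2]})
append_block(chain, {"client": "dev2", "update": [0.05, 0.1]})
print(chain_valid(chain))   # True
chain[0]["record"]["update"][0] = 9.9  # tamper with a recorded update
print(chain_valid(chain))   # False
```

Because every client can run `chain_valid`, no single node needs to be trusted to vouch for the recorded updates, which is the property the single-point-failure argument relies on.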


Learning to Backdoor Federated Learning

Li, Henger, Wu, Chen, Zhu, Sencun, Zheng, Zizhan

arXiv.org Artificial Intelligence

Various defenses against backdoor attacks on federated learning have been proposed recently, including training-stage aggregation-based defenses and post-training mitigation defenses. While these defenses obtain reasonable performance against existing backdoor attacks, which are mainly heuristic-based, we show that they are insufficient in the face of more advanced attacks. In particular, we propose a general reinforcement learning-based backdoor attack framework where the attacker first trains a (non-myopic) attack policy using a simulator built upon its local data and common knowledge of the FL system, which is then applied during actual FL training. Our attack framework is both adaptive and flexible and achieves strong attack performance and durability even under state-of-the-art defenses. Code is available at https://github.com/HengerLi/RLBackdoorFL. A backdoor attack against a deep learning model is one where a backdoor is embedded into the model at the training stage and is triggered at the test stage only for targeted data samples.
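The backdoor embedding step the last sentence describes is typically implemented by stamping a trigger pattern onto a fraction of the local training samples and relabeling them to an attacker-chosen class. The sketch below shows that data-poisoning step in its simplest form; the trigger shape, poison fraction, and target label are illustrative, and the paper's contribution (the RL attack policy) sits on top of such primitives rather than being shown here.

```python
import numpy as np

TARGET_LABEL = 7  # attacker-chosen class (illustrative)

def add_trigger(image, value=1.0):
    """Stamp a small pixel-pattern trigger onto an image. A backdoored
    model learns to map this pattern to TARGET_LABEL."""
    patched = image.copy()
    patched[-3:, -3:] = value  # 3x3 bright patch, bottom-right corner
    return patched

def poison_batch(images, labels, poison_frac=0.2, seed=0):
    """Poison a fraction of a local training batch: add the trigger
    and flip the label to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)),
                     replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels

# Toy batch of 10 "grayscale images".
imgs = np.zeros((10, 28, 28))
labs = np.arange(10)
p_imgs, p_labs = poison_batch(imgs, labs)
print((p_imgs[:, -3:, -3:] == 1.0).all(axis=(1, 2)).sum())  # 2 triggered
```

At test time, the attacker activates the backdoor by applying `add_trigger` to any input; clean inputs are classified normally, which is what makes such attacks hard to detect.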


An Intelligent Trust Cloud Management Method for Secure Clustering in 5G enabled Internet of Medical Things

Yang, Liu, Yu, Keping, Yang, Simon X., Chakraborty, Chinmay, Lu, Yinzhi, Guo, Tan

arXiv.org Artificial Intelligence

5G edge computing enabled Internet of Medical Things (IoMT) is an efficient technology for providing decentralized medical services, and device-to-device (D2D) communication is a promising paradigm for future 5G networks. To assure secure and reliable communication in 5G edge computing and D2D enabled IoMT systems, this paper presents an intelligent trust cloud management method. Firstly, an active training mechanism is proposed to construct the standard trust clouds. Secondly, individual trust clouds of the IoMT devices can be established through fuzzy trust inferring and recommending. Thirdly, a trust classification scheme is proposed to determine whether an IoMT device is malicious. Finally, a trust cloud update mechanism is presented to make the proposed trust management method adaptive and intelligent under an open wireless medium. Simulation results demonstrate that the proposed method can effectively address the trust uncertainty issue and improve the detection accuracy of malicious devices.
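The classification step can be pictured with a heavily simplified stand-in for the fuzzy cloud model: summarize each device's trust observations by an expectation and a dispersion, then flag devices whose trust drifts far below the standard cloud or whose behavior is erratic (as in an on-off attack). The thresholds, the two-statistic summary, and the device traces below are all illustrative, not the paper's scheme.

```python
import statistics

def trust_cloud(observations):
    """Summarize a device's trust observations as a simple cloud:
    expectation (Ex) and dispersion (here, population std dev).
    A simplified stand-in for the paper's fuzzy trust clouds."""
    return statistics.mean(observations), statistics.pstdev(observations)

def classify(device_obs, standard_ex=0.8, ex_thresh=0.15, en_thresh=0.2):
    """Flag a device as malicious if its trust expectation falls far
    below the standard cloud, or its behavior is highly erratic."""
    ex, en = trust_cloud(device_obs)
    if standard_ex - ex > ex_thresh or en > en_thresh:
        return "malicious"
    return "trusted"

honest = [0.85, 0.90, 0.80, 0.88, 0.82]  # stable, high trust
onoff = [0.95, 0.20, 0.90, 0.15, 0.92]   # on-off attack pattern
print(classify(honest))  # trusted
print(classify(onoff))   # malicious
```

The second statistic is what lets the scheme handle trust *uncertainty*: the on-off device above has phases of high trust, so a mean-only check with a lenient threshold could miss it, while its large dispersion gives it away.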


FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective

Sun, Jingwei, Li, Ang, DiValentin, Louis, Hassanzadeh, Amin, Chen, Yiran, Li, Hai

arXiv.org Artificial Intelligence

Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks. Several server-based defense approaches (e.g., robust aggregation) have been proposed to mitigate such attacks. However, we empirically show that under extremely strong attacks, these defensive methods fail to guarantee the robustness of FL. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks. In this work, we propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks that have already polluted the global model. The key idea of FL-WBC is to identify the parameter space where the long-lasting attack effect on parameters resides and perturb that space during local training. Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC. We conduct experiments on FashionMNIST and CIFAR10 to evaluate the defense against state-of-the-art model poisoning attacks. The results demonstrate that our method can effectively mitigate the impact of model poisoning attacks on the global model within 5 communication rounds with nearly no accuracy drop under both IID and Non-IID settings. Our defense is also complementary to existing server-based robust aggregation approaches and can further improve the robustness of FL under extremely strong attacks.
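The key idea, perturbing only the parameter subspace where attack effects persist, can be sketched with a simplified criterion: attack effects linger along directions where the local loss is flat, so a client adds small noise to coordinates with a small diagonal-Hessian estimate. The quantile cutoff, noise scale, and the Hessian proxy below are illustrative assumptions; the paper's exact criterion and analysis differ.

```python
import numpy as np

def wbc_perturb(local_update, hessian_diag, noise_std=0.05, quantile=0.2):
    """Client-side defense sketch in the spirit of FL-WBC: perturb the
    coordinates of the local update where the loss landscape is flat
    (small diagonal-Hessian estimate), since attack effects can
    persist along those directions without hurting the local loss."""
    flat = hessian_diag <= np.quantile(hessian_diag, quantile)
    noise = np.random.default_rng(0).normal(0.0, noise_std,
                                            local_update.shape)
    # Only flat coordinates are perturbed; the rest pass through.
    return np.where(flat, local_update + noise, local_update)

update = np.ones(10)
h_diag = np.linspace(0.01, 1.0, 10)  # first coordinates are "flat"
defended = wbc_perturb(update, h_diag)
print(np.count_nonzero(defended != update))  # 2 coordinates perturbed
```

Because the perturbation is confined to flat directions, it barely changes the local loss, which is consistent with the reported near-zero accuracy drop while still disrupting a poisoned model's persistence.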


Robust Blockchained Federated Learning with Model Validation and Proof-of-Stake Inspired Consensus

Chen, Hang, Asif, Syed Ali, Park, Jihong, Shen, Chien-Chung, Bennis, Mehdi

arXiv.org Artificial Intelligence

Federated learning (FL) is a promising distributed learning solution that only exchanges model parameters without revealing raw data. However, the centralized architecture of FL is vulnerable to a single point of failure. In addition, FL does not examine the legitimacy of local models, so even a small fraction of malicious devices can disrupt global training. To resolve these robustness issues of FL, in this paper we propose a blockchain-based decentralized FL framework, termed VBFL, by exploiting two mechanisms in a blockchained architecture. First, we introduce a novel decentralized validation mechanism such that the legitimacy of local model updates is examined by individual validators. Second, we design a dedicated proof-of-stake consensus mechanism where stake is more frequently rewarded to honest devices, which protects the legitimate local model updates by increasing their chances of dictating the blocks appended to the blockchain. Together, these solutions promote more federation among legitimate devices, enabling robust FL. Our emulation results on MNIST classification corroborate that with 15% of malicious devices, VBFL achieves 87% accuracy, which is 7.4x higher than Vanilla FL.
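The interplay of the two mechanisms can be sketched as a loop: validators accept or reject each local update, accepted updates earn their device stake, and block proposers are drawn with probability proportional to stake, so honest devices increasingly dictate the chain. The acceptance criterion, reward size, and device names below are illustrative, not VBFL's actual protocol.

```python
import random

def validate(update_acc, baseline_acc, tol=0.02):
    """A validator accepts a local update if it does not degrade
    validation accuracy beyond a tolerance (illustrative criterion)."""
    return update_acc >= baseline_acc - tol

def reward_stakes(stakes, verdicts, reward=1):
    """PoS-inspired rule: devices whose updates pass validation earn
    stake, raising their chance of proposing the next block."""
    for device, accepted in verdicts.items():
        if accepted:
            stakes[device] += reward
    return stakes

def pick_proposer(stakes, rng):
    """Sample the next block proposer with probability proportional
    to stake."""
    devices = list(stakes)
    weights = [stakes[d] for d in devices]
    return rng.choices(devices, weights=weights, k=1)[0]

stakes = {"honest_a": 1, "honest_b": 1, "malicious": 1}
verdicts = {"honest_a": validate(0.91, 0.90),
            "honest_b": validate(0.90, 0.90),
            "malicious": validate(0.40, 0.90)}
stakes = reward_stakes(stakes, verdicts)
print(stakes)  # honest devices gain stake; the malicious one does not
proposer = pick_proposer(stakes, random.Random(0))
```

Over many rounds this feedback loop compounds: each accepted update makes honest devices more likely to propose blocks, which is how legitimate updates come to dictate the blockchain.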